Results 1 - 7 of 7
1.
Front Physiol ; 15: 1288657, 2024.
Article in English | MEDLINE | ID: mdl-38370011

ABSTRACT

Introduction: Magnetic resonance imaging (MRI) enables direct measurement of muscle volume and quality, allowing an in-depth understanding of their associations with anthropometric traits and health conditions. However, it is unclear which measurements (total muscle volume, regional volumes, or measures of muscle quality such as intermuscular adipose tissue (IMAT) and proton density fat fraction (PDFF)) are most informative and associate with relevant health conditions such as dynapenia and frailty. Methods: We measured image-derived phenotypes (IDPs), including total and regional muscle volumes and measures of muscle quality, from the neck-to-knee Dixon images of 44,520 UK Biobank participants. We further segmented the paraspinal muscle from 2D quantitative MRI to quantify muscle PDFF and iron concentration. We defined dynapenia based on grip strength below sex-specific cut-off points, and frailty based on five criteria (weight loss, exhaustion, grip strength, low physical activity and slow walking pace). We used logistic regression to investigate the associations of muscle volume and quality measurements with dynapenia and frailty. Results: Muscle volumes were significantly higher in male than in female participants, even after correcting for height, while IMAT (corrected for muscle volume) and paraspinal muscle PDFF were significantly higher in female than in male participants. Of the overall cohort, 7.6% (N = 3,261) were identified with dynapenia and 1.1% (N = 455) with frailty. Dynapenia and frailty were positively associated with age and negatively associated with physical activity levels. Additionally, reduced muscle volume and quality measurements were associated with both dynapenia and frailty.
For dynapenia, muscle volume IDPs were the most informative, particularly total muscle volume with an odds ratio (OR) of 0.392, while for frailty, muscle quality measures were the most informative, in particular thigh IMAT volume indexed to height squared (OR = 1.396), both with p-values below the Bonferroni-corrected threshold (p < 8.8×10⁻⁵). Conclusion: Our fully automated method enables the quantification of muscle volume and quality at a scale suitable for large population-based studies. For dynapenia, muscle volumes, particularly those with greater body coverage such as total muscle volume, were the most informative, whereas for frailty, markers of muscle quality were the most informative IDPs. These results suggest that different measurements may have varying diagnostic value for different health conditions.
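The kind of logistic-regression association reported above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline: the effect size, prevalence and number of tests are assumptions chosen to mirror the quoted figures (OR below 1 for a protective muscle-volume IDP, ~7.6% prevalence, Bonferroni threshold near 8.8×10⁻⁵).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Standardised image-derived phenotype (e.g. total muscle volume, z-scored).
muscle_volume = rng.normal(size=n)

# Simulate dynapenia: lower muscle volume -> higher odds (true log-OR = -0.9);
# intercept -2.5 gives a baseline prevalence of roughly 7-8%.
logit = -2.5 - 0.9 * muscle_volume
p = 1.0 / (1.0 + np.exp(-logit))
dynapenia = rng.binomial(1, p)

# Fit the association and express the coefficient as an odds ratio per SD.
model = LogisticRegression().fit(muscle_volume.reshape(-1, 1), dynapenia)
odds_ratio = float(np.exp(model.coef_[0, 0]))

# A Bonferroni correction over ~570 tests reproduces a threshold near 8.8e-5.
bonferroni_threshold = 0.05 / 570
print(f"OR per SD of muscle volume: {odds_ratio:.3f}")
```

An OR below 1 here indicates that each standard-deviation increase in the IDP lowers the odds of the outcome, matching the direction of the total-muscle result.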

2.
BMC Nephrol ; 24(1): 362, 2023 12 06.
Article in English | MEDLINE | ID: mdl-38057740

ABSTRACT

BACKGROUND: Organ measurements derived from magnetic resonance imaging (MRI) have the potential to enhance our understanding of the precise phenotypic variations underlying many clinical conditions. METHODS: We applied morphometric methods to study the kidneys by constructing surface meshes from kidney segmentations from abdominal MRI data in 38,868 participants in the UK Biobank. Using mesh-based analysis techniques based on statistical parametric maps (SPMs), we were able to detect variations in specific regions of the kidney and associate those with anthropometric traits as well as disease states including chronic kidney disease (CKD), type-2 diabetes (T2D), and hypertension. Statistical shape analysis (SSA) based on principal component analysis was also used within the disease population and the principal component scores were used to assess the risk of disease events. RESULTS: We show that CKD, T2D and hypertension were associated with kidney shape. Age was associated with kidney shape consistently across disease groups. Body mass index (BMI) and waist-to-hip ratio (WHR) were also associated with kidney shape for the participants with T2D. Using SSA, we were able to capture kidney shape variations, relative to size, angle, straightness, width, length, and thickness of the kidneys, within disease populations. We identified significant associations between both left and right kidney length and width and incidence of CKD (hazard ratio (HR): 0.74, 95% CI: 0.61-0.90, p < 0.05, in the left kidney; HR: 0.76, 95% CI: 0.63-0.92, p < 0.05, in the right kidney) and hypertension (HR: 1.16, 95% CI: 1.03-1.29, p < 0.05, in the left kidney; HR: 0.87, 95% CI: 0.79-0.96, p < 0.05, in the right kidney). CONCLUSIONS: The results suggest that shape-based analysis of the kidneys can augment studies aiming at the better categorisation of pathologies associated with chronic kidney conditions.


Subject(s)
Diabetes Mellitus, Type 2 ; Hypertension ; Renal Insufficiency, Chronic ; Humans ; Kidney/diagnostic imaging ; Anthropometry ; Renal Insufficiency, Chronic/diagnostic imaging ; Renal Insufficiency, Chronic/epidemiology ; Body Mass Index ; Hypertension/diagnostic imaging ; Hypertension/epidemiology ; Diabetes Mellitus, Type 2/diagnostic imaging ; Diabetes Mellitus, Type 2/epidemiology ; Risk Factors
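The PCA step of the statistical shape analysis described above can be sketched on toy data. This is a simplified sketch assuming already-registered surface meshes with corresponding vertices; the real pipeline builds meshes from kidney segmentations and feeds the PC scores into survival models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_vertices = 200, 50

# Toy registered meshes: each subject's kidney surface as (n_vertices, 3)
# vertex coordinates, generated as a mean shape plus random deformations.
mean_shape = rng.normal(size=(n_vertices, 3))
deformations = rng.normal(scale=0.1, size=(n_subjects, n_vertices, 3))
meshes = mean_shape + deformations

# Flatten each mesh to a single shape vector and centre across subjects.
X = meshes.reshape(n_subjects, -1)
X_centred = X - X.mean(axis=0)

# PCA via SVD: rows of Vt are shape modes (directions of shape variation);
# pc_scores are per-subject coordinates along each mode, usable as
# covariates when assessing the risk of disease events.
U, S, Vt = np.linalg.svd(X_centred, full_matrices=False)
pc_scores = U * S
explained_var = S**2 / np.sum(S**2)
```

Each principal component captures a coherent pattern of deformation (e.g. changes in length, width or thickness) relative to the mean shape.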
3.
PLoS One ; 18(4): e0283506, 2023.
Article in English | MEDLINE | ID: mdl-37053189

ABSTRACT

The main drivers of COVID-19 disease severity and the impact of COVID-19 on long-term health after recovery are yet to be fully understood. Medical imaging studies investigating COVID-19 to date have mostly been limited to small datasets and post hoc analyses of severe cases. The UK Biobank recruited recovered SARS-CoV-2-positive individuals (n = 967) and matched controls (n = 913) who had been extensively imaged prior to the pandemic and underwent follow-up scanning. In this study, we investigated longitudinal changes in body composition, as well as the associations of pre-pandemic image-derived phenotypes with COVID-19 severity. Our longitudinal analysis, in a population of mostly mild cases, associated a decrease in lung volume with SARS-CoV-2 positivity. We also observed that increased visceral adipose tissue and liver fat, and reduced muscle volume, prior to COVID-19 were associated with COVID-19 disease severity. Finally, we trained a machine classifier with demographic, anthropometric and imaging traits, and showed that visceral fat, liver fat and muscle volume have prognostic value for COVID-19 disease severity beyond the standard demographic and anthropometric measurements. This combination of image-derived phenotypes from abdominal MRI scans and ensemble learning to predict risk may have future clinical utility in identifying populations at risk of a severe COVID-19 outcome.


Subject(s)
COVID-19 ; Humans ; COVID-19/diagnostic imaging ; SARS-CoV-2 ; Prognosis ; Tomography, X-Ray Computed ; Body Composition
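The idea of testing whether imaging traits add prognostic value beyond demographics can be sketched with an ensemble classifier and AUROC comparison. Everything here is synthetic and illustrative: the feature names, effect sizes and the random-forest choice are assumptions, not the study's exact model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000

# Synthetic features: demographics/anthropometrics plus imaging IDPs.
age = rng.normal(60, 10, n)
bmi = rng.normal(27, 4, n)
visceral_fat = rng.normal(5, 2, n)
liver_fat = rng.normal(4, 2, n)
muscle_volume = rng.normal(0, 1, n)

# Simulated severity: by construction, the imaging traits carry signal
# beyond the demographic variables.
logit = 0.03*(age - 60) + 0.4*visceral_fat + 0.3*liver_fat - 0.5*muscle_volume - 2.5
severe = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_demo = np.column_stack([age, bmi])
X_full = np.column_stack([age, bmi, visceral_fat, liver_fat, muscle_volume])

# Train an ensemble model on each feature set and compare held-out AUROC.
aucs = {}
for name, X in [("demographic", X_demo), ("demographic+imaging", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, severe, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

A gain in AUROC from adding the imaging features is the sense in which they have prognostic value "beyond" the standard measurements.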
4.
Nat Commun ; 12(1): 1613, 2021 03 12.
Article in English | MEDLINE | ID: mdl-33712588

ABSTRACT

Computational methods have made substantial progress in improving the accuracy and throughput of pathology workflows for diagnostic, prognostic, and genomic prediction. Still, lack of interpretability remains a significant barrier to clinical integration. We present an approach for predicting clinically relevant molecular phenotypes from whole-slide histopathology images using human-interpretable image features (HIFs). Our method leverages >1.6 million annotations from board-certified pathologists across >5,700 samples to train deep learning models for cell and tissue classification that can exhaustively map whole-slide images at two- and four-micron resolution. Cell- and tissue-type model outputs are combined into 607 HIFs that quantify specific and biologically relevant characteristics across five cancer types. We demonstrate that these HIFs correlate with well-known markers of the tumor microenvironment and can predict diverse molecular signatures (AUROC 0.601-0.864), including expression of four immune checkpoint proteins and homologous recombination deficiency, with performance comparable to 'black-box' methods. Our HIF-based approach provides a comprehensive, quantitative, and interpretable window into the composition and spatial architecture of the tumor microenvironment.


Subject(s)
Neoplasms/classification ; Neoplasms/diagnostic imaging ; Neoplasms/pathology ; Pathology, Molecular/methods ; Phenotype ; Algorithms ; Deep Learning ; Humans ; Image Processing, Computer-Assisted ; Precision Medicine ; Tumor Microenvironment
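The step of combining cell- and tissue-classification outputs into an interpretable feature can be sketched on toy arrays. The class labels, map sizes and the specific feature (lymphocyte density within a tissue compartment) are illustrative assumptions; the paper's 607 HIFs are far richer.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy whole-slide model outputs: a per-pixel tissue class and cell class.
# Tissue: 0 = background, 1 = tumor, 2 = stroma. Cell: 0 = none, 1 = lymphocyte.
tissue_map = rng.integers(0, 3, size=(512, 512))
cell_map = rng.integers(0, 2, size=(512, 512))

def lymphocyte_density_in(tissue_map, cell_map, tissue_class):
    """Fraction of pixels of a given tissue class occupied by lymphocytes --
    one example of a human-interpretable image feature (HIF)."""
    mask = tissue_map == tissue_class
    if mask.sum() == 0:
        return 0.0
    return float((cell_map[mask] == 1).mean())

# Compartment-specific densities: these remain directly interpretable,
# unlike the internal activations of a 'black-box' model.
hif_tumor = lymphocyte_density_in(tissue_map, cell_map, tissue_class=1)
hif_stroma = lymphocyte_density_in(tissue_map, cell_map, tissue_class=2)
```

A vector of such features per slide can then feed a conventional classifier for molecular-signature prediction.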
5.
IEEE Trans Pattern Anal Mach Intell ; 41(12): 2835-2845, 2019 12.
Article in English | MEDLINE | ID: mdl-30188814

ABSTRACT

Color is a fundamental image feature of facial expressions. For example, when we furrow our eyebrows in anger, blood rushes in, turning some face areas red; when we go white in fear, blood drains from the face. Surprisingly, these image properties have not been exploited to recognize the facial action units (AUs) associated with these expressions. Herein, we present the first system to recognize AUs and their intensities using these functional color changes. These color features are shown to be robust to changes in identity, gender, race, ethnicity, and skin color. Specifically, we identify the chromaticity changes defining the transition of an AU from inactive to active and use an innovative Gabor transform-based algorithm to gain invariance to the timing of these changes. Because these image changes are given by functions rather than vectors, we use functional classifiers to identify the most discriminant color features of an AU and its intensities. We demonstrate that, using these discriminant color features, one can achieve results superior to those of the state of the art. Finally, we define an algorithm that allows us to use the learned functional color representation in still images. This is done by learning the mapping between images and the identified functional color features in videos. Our algorithm works in real time, i.e., 30 frames/second/CPU thread.


Subject(s)
Face ; Image Processing, Computer-Assisted/methods ; Machine Learning ; Algorithms ; Color ; Emotions/classification ; Emotions/physiology ; Face/anatomy & histology ; Face/diagnostic imaging ; Face/physiology ; Humans ; Skin Pigmentation/physiology ; Video Recording
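A simplified analogue of the timing-invariance idea can be sketched with a Fourier magnitude spectrum, which is exactly invariant to circular time shifts. This is not the paper's Gabor-based algorithm; the signal, sampling rate and shift are toy assumptions chosen only to show why spectral-magnitude features do not depend on when a chromaticity change occurs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy chromaticity time series for one face region during an AU activation:
# a smooth bump in redness plus noise, sampled at 30 frames/second.
t = np.arange(90) / 30.0
redness = np.exp(-((t - 1.0) ** 2) / 0.1) + 0.05 * rng.normal(size=t.size)

# The same activation occurring later in the clip (circular shift by 20 frames).
redness_shifted = np.roll(redness, 20)

# Magnitude spectra are identical under circular time shifts, so features
# built from them are insensitive to the timing of the color change.
spec = np.abs(np.fft.rfft(redness))
spec_shifted = np.abs(np.fft.rfft(redness_shifted))
```

The Gabor transform used in the paper adds localisation in time and frequency, but the invariance principle exploited is the same.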
6.
Proc Natl Acad Sci U S A ; 115(14): 3581-3586, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29555780

ABSTRACT

Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.


Subject(s)
Color ; Emotions/physiology ; Face/physiology ; Facial Expression ; Facial Muscles/physiology ; Pattern Recognition, Visual ; Adult ; Female ; Humans ; Male ; Young Adult
7.
J Neurosci ; 36(16): 4434-42, 2016 Apr 20.
Article in English | MEDLINE | ID: mdl-27098688

ABSTRACT

By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT: Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. 
Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment.


Subject(s)
Brain/metabolism ; Facial Expression ; Facial Recognition/physiology ; Photic Stimulation/methods ; Adult ; Brain Mapping/methods ; Female ; Humans ; Magnetic Resonance Imaging/methods ; Male
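The cross-subject decoding scheme described above can be sketched as leave-one-subject-out multivoxel pattern analysis on synthetic data. The pattern structure, subject counts and linear classifier are illustrative assumptions, not the study's fMRI data or exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_subjects, trials_per_subject, n_voxels = 8, 40, 100

# Synthetic multivoxel patterns: a shared AU-related activation pattern
# plus noise; labels indicate AU present (1) or absent (0) per trial.
shared_pattern = rng.normal(size=n_voxels)
X, y, subject = [], [], []
for s in range(n_subjects):
    labels = rng.integers(0, 2, trials_per_subject)
    noise = rng.normal(scale=2.0, size=(trials_per_subject, n_voxels))
    X.append(labels[:, None] * shared_pattern + noise)
    y.append(labels)
    subject.append(np.full(trials_per_subject, s))
X, y, subject = np.vstack(X), np.concatenate(y), np.concatenate(subject)

# Leave-one-subject-out decoding: train on all other subjects, test on the
# held-out one. Above-chance accuracy implies the coding is consistent
# across people.
accuracies = []
for s in range(n_subjects):
    train, test = subject != s, subject == s
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))
mean_accuracy = float(np.mean(accuracies))
```

Because the decoder is never shown the held-out subject's data, its accuracy directly tests the cross-subject consistency of the representation.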